Search for: All records

Creators/Authors contains: "Fluet, Kimberly"


  1. Assignments based on meaningful real-world contexts have been shown to be valuable in introductory computing education. However, it can be difficult to distinguish the value of a broad context from the value of a particular instantiation of that context. In this work in progress, we report our initial findings from deployments of different pencil-puzzle-based assignments. Specifically, we investigated the use of pencil puzzles as a contextual domain, working with instructors at eight institutions to deliver assignments appropriate to their situations and aligned with their existing materials. We then evaluated the assignments using student grades and survey responses about student perceptions of the assignments, including self-assessed learning, across a wide array of demographic variables. Our initial results show that although student responses depended somewhat on prior programming experience, and female students were more positive about one aspect, overall these types of assignments do not appear to put particular groups of students at a strong (dis)advantage.
  2. Computing theory is often perceived as challenging by students, and verifying the correctness of a student’s automaton or grammar is time-consuming for instructors. Aiming to provide benefits to both students and instructors, we designed an automated feedback tool for assignments where students construct automata or grammars. Our tool, built as an extension to the widely used JFLAP software, determines whether a submission is correct, and for incorrect submissions it provides a “witness” string demonstrating the incorrectness (a minimal sketch of how such a witness can be computed for DFAs appears after this list). We studied the usage and benefits of our tool in two terms, Fall 2019 and Spring 2021. Each term, students in one section of the Introduction to Computer Science Theory course were required to use our tool for sample homework questions targeting DFAs, NFAs, RegExs, CFGs, and PDAs. In Fall 2019, this was a regular section of the course. We also collected comparison data from another section that did not use our tool but had the same instructor and homework assignments. In Spring 2021, a smaller honors section provided the perspective of that demographic. Overall, students who used the tool reported that it helped them not only to solve the homework questions (and they performed better than the comparison group) but also to better understand the underlying theory concepts. They were engaged with the tool: almost all persisted with their attempts until their submission was correct, despite not being able to random-walk to a solution. This indicates that witness feedback, a succinct explanation of incorrectness, is effective. The tool also assisted instructors with assignment grading.
  3.
  4.
    Computing theory analyzes abstract computational models to rigorously study the computational difficulty of various problems. Introductory computing theory can be challenging for undergraduate students, and the overarching goal of our research is to help students learn these computational models. The most common pedagogical tool for interacting with these models is the Java Formal Languages and Automata Package (JFLAP). We developed a JFLAP server extension, which accepts homework submissions from students, evaluates each submission as correct or incorrect, and provides a witness string when a submission is incorrect. Our extension currently provides witness feedback for deterministic finite automata, nondeterministic finite automata, regular expressions, context-free grammars, and pushdown automata. In Fall 2019, we ran a preliminary investigation on two synchronized sections (Control and Study) of the required undergraduate course Introduction to Computer Science Theory. The Study section (n = 29) used our extension for five targeted homework questions, and the Control section (n = 35) submitted these problems by traditional means. The Study section strongly outperformed the Control section with respect to the percentage of perfect homework grades on the targeted questions. Our most interesting result was student persistence: with only the short witness string as feedback, students voluntarily persisted in submitting attempts until correct.
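The witness-feedback idea described in entries 2 and 4 can be illustrated with a short sketch. The Python code below is a hypothetical illustration, not the authors’ JFLAP extension: it finds a shortest “witness” string on which a student DFA and a reference DFA disagree, using breadth-first search over the product of the two machines. The function name find_witness and the dictionary-based DFA encoding ('start', 'accept', 'delta') are assumptions made for this sketch.

    # Hypothetical sketch of witness generation for DFAs (not the JFLAP extension's code).
    # Breadth-first search over the product of two DFAs finds a shortest string on
    # which they disagree; if none exists, the DFAs accept the same language.
    from collections import deque

    def find_witness(dfa_a, dfa_b, alphabet):
        """Return a shortest string accepted by exactly one of the two complete DFAs,
        or None if they are equivalent. Each DFA is a dict with keys 'start',
        'accept' (a set of states), and 'delta' (a dict mapping (state, symbol) to
        the next state) -- an illustrative encoding, not JFLAP's data model."""
        start = (dfa_a['start'], dfa_b['start'])
        queue = deque([(start, "")])
        seen = {start}
        while queue:
            (qa, qb), prefix = queue.popleft()
            # The prefix is a witness when exactly one machine accepts it.
            if (qa in dfa_a['accept']) != (qb in dfa_b['accept']):
                return prefix
            for symbol in alphabet:
                nxt = (dfa_a['delta'][(qa, symbol)], dfa_b['delta'][(qb, symbol)])
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, prefix + symbol))
        return None  # no reachable disagreement: the languages are equal

    # Example: a reference DFA accepting strings with an even number of 'a's versus a
    # "student" DFA accepting strings with an odd number of 'a's, over alphabet {a, b}.
    even_a = {'start': 0, 'accept': {0},
              'delta': {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}}
    odd_a  = {'start': 0, 'accept': {1},
              'delta': {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}}
    print(repr(find_witness(even_a, odd_a, "ab")))  # -> '' (the empty string distinguishes them)

Witness generation for the other formalisms the abstracts mention (NFAs, regular expressions, CFGs, and PDAs) requires more machinery than this DFA-only sketch, and the abstracts do not describe the extension’s internal algorithms.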